On Derivation of MLP Backpropagation from the Kelley-Bryson Optimal-Control Gradient Formula and Its Application

Authors

  • Eiji Mizutani
  • Stuart E. Dreyfus
  • Kenichi Nishio
Abstract

The well-known backpropagation (BP) derivative-computation process for multilayer perceptron (MLP) learning can be viewed as a simplified version of the Kelley-Bryson gradient formula in classical discrete-time optimal control theory [1]. We detail the derivation in the spirit of dynamic programming, showing how these formulas can serve to implement more elaborate learning, whereby teacher signals can be presented to any node at any hidden layer, as well as at the terminal output layer. We illustrate such an elaborate training scheme on a small-scale industrial problem as a concrete example, in which some hidden nodes are taught to produce specified target values. In this context, part of the hidden layer is no longer “hidden.”
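The paper's exact formulation is not reproduced here, but the idea of presenting a teacher signal to a hidden node can be sketched in a few lines. The sketch below assumes a small 2-2-1 sigmoid MLP and a squared-error penalty (with an assumed weight `lam`) on one designated hidden node; the hidden-node error is simply injected into the backward pass alongside the usual backpropagated term.

```python
import numpy as np

# Minimal sketch (not the authors' code): a 2-2-1 sigmoid MLP in which
# hidden node 0 also receives a teacher signal, so its error term is
# added directly to the backpropagated error in the backward pass.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 2))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(1, 2))   # hidden -> output weights

x = np.array([1.0, -1.0])                 # one training pattern
y_target = np.array([0.5])                # terminal (output-layer) target
h_target = 0.8                            # teacher signal for hidden node 0
lam = 0.3                                 # assumed weight on the hidden penalty
lr = 0.5                                  # assumed learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    # forward pass
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)

    # backward pass: output delta exactly as in standard BP
    delta_y = (y - y_target) * y * (1.0 - y)

    # hidden error: backpropagated term PLUS the direct teacher term
    err_h = W2.T @ delta_y
    err_h[0] += lam * (h[0] - h_target)   # node 0 is "no longer hidden"
    delta_h = err_h * h * (1.0 - h)

    # gradient-descent updates
    W2 -= lr * np.outer(delta_y, h)
    W1 -= lr * np.outer(delta_h, x)

# after training, the output approaches its target and so does hidden node 0
print(float(sigmoid(W2 @ sigmoid(W1 @ x))[0]))
print(float(sigmoid(W1 @ x)[0]))
```

Only the two marked lines differ from plain BP: the hidden-layer error receives an extra term wherever a hidden target is available, and everything downstream of the backward pass is unchanged.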


Similar articles

Overfitting and Neural Networks: Conjugate Gradient and Backpropagation

Methods for controlling the bias/variance tradeoff typically assume that overfitting or overtraining is a global phenomenon. For multi-layer perceptron (MLP) neural networks, global parameters such as the training time (e.g. based on validation tests), network size, or the amount of weight decay are commonly used to control the bias/variance tradeoff. However, the degree of overfitting can vary...


On-line Adaptive Learning Rate BP Algorithm for MLP and Application to an Identification Problem

An on-line algorithm that uses an adaptive learning rate is proposed. Its development is based on an analysis of the convergence of the conventional gradient descent method for three-layer BP neural networks. The effectiveness of the proposed algorithm, applied to the identification and prediction of the behavior of non-linear dynamic systems, is demonstrated by simulation experiments.


Optimal integrated passive/active design of the suspension system using iteration on the Lyapunov equations

In this paper, an iterative technique is proposed to solve linear integrated active/passive design problems. The optimality of the active and passive parts leads to a nonlinear algebraic Riccati equation in the active parameters and some associated additional Lyapunov equations in the passive parameters. Rather than the solution of the nonlinear algebraic Riccati equation, it is proposed ...


Numerical Solution of Optimal Heating of Temperature Field in Uncertain Environment Modelled by the use of Boundary Control

In the present paper, optimal heating of a temperature field, modelled as a boundary optimal control problem, is investigated in uncertain environments and then solved numerically. In the physical modelling, a partial differential equation with stochastic input and a stochastic parameter is applied as the constraint of the optimal control problem. Controls are implemented ...


A Study on Neural Network Training Algorithm for Multiface Detection in Static Images

This paper reports study results on neural network training algorithms based on numerical optimization techniques for multiface detection in static images. The training algorithms involved are scaled conjugate gradient backpropagation, conjugate gradient backpropagation with Polak-Ribière updates, conjugate gradient backpropagation with Fletcher-Reeves updates, one-step secant backpropagation and resilient ba...




Publication date: 2000